Search for: All records

Creators/Authors contains: "Hazimeh, Hussein"


  1. In this work, we consider a setting where the goal is to achieve adversarial robustness on a target task, given only unlabeled training data from the task distribution, by leveraging labeled training data from a different yet related source task distribution. The absence of labels on the training data for the target task poses a unique challenge, as conventional adversarial robustness defenses cannot be directly applied. To address this challenge, we first bound the adversarial population 0-1 robust loss on the target task in terms of (i) the empirical 0-1 loss on the source task, (ii) the joint loss on the source and target tasks of an ideal classifier, and (iii) a measure of worst-case domain divergence (a schematic form of this bound is sketched after the list below). Motivated by this bound, we develop a novel unified defense framework called Divergence-Aware adveRsarial Training (DART), which can be used in conjunction with a variety of standard unsupervised domain adaptation (UDA) methods, e.g., DANN. DART is applicable to general threat models, including the popular \ell_p-norm model, and does not require heuristic regularizers or architectural changes. We also release DomainRobust, a testbed for evaluating the robustness of UDA models to adversarial attacks. DomainRobust consists of 4 multidomain benchmark datasets (with 46 source-target pairs) and 7 meta-algorithms, with a total of 11 variants. Our large-scale experiments demonstrate that, on average, DART significantly enhances model robustness on all benchmarks compared to the state of the art, while maintaining competitive standard accuracy. The relative improvement in robustness from DART reaches up to 29.2% on the source-target domain pairs considered.
    Free, publicly-accessible full text available April 1, 2026
  2. The L_0-regularized least squares problem (a.k.a. best subsets) is central to sparse statistical learning and has attracted significant attention across the wider statistics, machine learning, and optimization communities. Recent work has shown that modern mixed integer optimization (MIO) solvers can be used to address small to moderate instances of this problem. In spite of the usefulness of L_0-based estimators and generic MIO solvers, there is a steep computational price to pay when compared with popular sparse learning algorithms (e.g., based on L_1 regularization). In this paper, we aim to push the frontiers of computation for a family of L_0-regularized problems with additional convex penalties (an illustrative form of this objective is written out after the list below). We propose a new hierarchy of necessary optimality conditions for these problems. We develop fast algorithms, based on coordinate descent and local combinatorial optimization, that are guaranteed to converge to solutions satisfying these optimality conditions. From a statistical viewpoint, an interesting story emerges. When the signal strength is high, our combinatorial optimization algorithms have an edge in challenging statistical settings. When the signal is lower, pure L_0 benefits from additional convex regularization. We empirically demonstrate that our family of L_0-based estimators can outperform state-of-the-art sparse learning algorithms in terms of a combination of prediction, estimation, and variable selection metrics under various regimes (e.g., different signal strengths, feature correlations, and numbers of samples and features). Our new open-source sparse learning toolkit L0Learn (available on CRAN and GitHub) reaches up to a threefold speedup (with p up to 10^6) when compared with competing toolkits such as glmnet and ncvreg.
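Reading aid for item 1: the bound referenced in that abstract can be rendered schematically as below. This is an illustrative sketch based only on the three terms named in the abstract, not the paper's exact statement; the symbols R^{adv}_T, \hat{R}_S, \lambda^*, and d(S, T) are notation assumed here for exposition.

    R^{adv}_T(h) \le \hat{R}_S(h) + \lambda^* + d(S, T)

Here R^{adv}_T(h) is the adversarial population 0-1 robust loss of a classifier h on the target task, \hat{R}_S(h) is the empirical 0-1 loss on the labeled source data, \lambda^* denotes the joint source-and-target loss of an ideal classifier, and d(S, T) is a worst-case measure of divergence between the source and target domains.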
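Reading aid for item 2: a standard way to write a family of L_0-regularized least squares problems with additional convex penalties is the following. The particular convex penalties shown (\ell_1 and squared \ell_2) are assumptions made here for illustration, consistent with the abstract's description.

    \min_{\beta \in \mathbb{R}^p} \tfrac{1}{2} \| y - X\beta \|_2^2 + \lambda_0 \|\beta\|_0 + \lambda_1 \|\beta\|_1 + \lambda_2 \|\beta\|_2^2

Here \|\beta\|_0 counts the number of nonzero coefficients (the best-subsets term), while the convex \ell_1 and squared \ell_2 terms supply the additional regularization that, per the abstract, helps when the signal strength is lower; setting \lambda_1 = \lambda_2 = 0 recovers the pure L_0-regularized problem.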